The Open Group Guide

Security Architecture Principles


In collaboration with The SABSA® Institute


Copyright © 2018, The Open Group

The Open Group hereby authorizes you to use this document for any purpose, PROVIDED THAT any copy of this document, or any part thereof, which you make shall retain all copyright and other proprietary notices contained herein.

This document may contain other proprietary notices and copyright information.

Nothing contained herein shall be construed as conferring by implication, estoppel, or otherwise any license or right under any patent or trademark of The Open Group or any third party. Except as expressly provided above, nothing contained herein shall be construed as conferring any license or right under any copyright of The Open Group.

Note that any product, process, or technology in this document may be the subject of other intellectual property rights reserved by The Open Group, and may not be licensed hereunder.

This document is provided “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, OR NON-INFRINGEMENT. Some jurisdictions do not allow the exclusion of implied warranties, so the above exclusion may not apply to you.

Any publication of The Open Group may include technical inaccuracies or typographical errors. Changes may be periodically made to these publications; these changes will be incorporated in new editions of these publications. The Open Group may make improvements and/or changes in the products and/or the programs described in these publications at any time without notice.

Should any viewer of this document respond with information including feedback data, such as questions, comments, suggestions, or the like regarding the content of this document, such information shall be deemed to be non-confidential and The Open Group shall have no obligation of any kind with respect to such information and shall be free to reproduce, use, disclose, and distribute the information to others without limitation. Further, The Open Group shall be free to use any ideas, concepts, know-how, or techniques contained in such information for any purpose whatsoever including but not limited to developing, manufacturing, and marketing products incorporating such information.

If you did not obtain this copy through The Open Group, it may not be the latest version. For your convenience, the latest version of this publication may be downloaded at www.opengroup.org/library.

 

The Open Group Guide

Security Architecture Principles

ISBN:                               TBA

Document Number:     TBA

 

Published by The Open Group, <Month> 2018.

Comments relating to the material contained in this document may be submitted to:

The Open Group, Apex Plaza, Forbury Road, Reading, Berkshire, RG1 1AX, United Kingdom

or by electronic mail to:

ogspecs@opengroup.org

Contents

1       Principle 1: Business Risk-Driven Security

2       Principle 2: Context

3       Principle 3: Scope

4       Principle 4: Intelligence

5       Principle 5: Trust

6       Principle 6: Holistic Approach

7       Principle 7: Simplicity

8       Principle 8: Reuse

9       Principle 9: Resilience

10     Principle 10: Process-Driven

11     Principle 11: Optimal Conflict Resolution

12     Principle 12: Communication Clarity

13     Principle 13: Usability

14     Principle 14: Security by Design

15     Principle 15: Precedence

16     Principle 16: Device Sovereignty

17     Principle 17: Defense in Depth

18     Principle 18: Least Privilege

19     Principle 19: Access Control

20     Principle 20: Communication Security

 

Preface

The Open Group

The Open Group is a global consortium that enables the achievement of business objectives through technology standards. Our diverse membership of more than 600 organizations includes customers, systems and solutions suppliers, tools vendors, integrators, academics, and consultants across multiple industries.

The Open Group aims to:

·         Capture, understand, and address current and emerging requirements, establish policies, and share best practices

·         Facilitate interoperability, develop consensus, and evolve and integrate specifications and open source technologies

·         Operate the industry’s premier certification service

Further information on The Open Group is available at www.opengroup.org.

The Open Group publishes a wide range of technical documentation, most of which is focused on development of Open Group Standards and Guides, but which also includes white papers, technical studies, certification and testing documentation, and business titles. Full details and a catalog are available at www.opengroup.org/library.

This Document

This document is The Open Group Guide to Security Architecture Principles. It has been developed and approved by The Open Group.

This Guide is intended to be the first in a series of documents on Security Architecture.

Trademarks

ArchiMate®, DirecNet®, Making Standards Work®, OpenPegasus®, Platform 3.0®, The Open Group®, TOGAF®, UNIX®, UNIXWARE®, and the Open Brand X® logo are registered trademarks and Boundaryless Information Flow™, Build with Integrity Buy with Confidence™, Dependability Through Assuredness™, Digital Practitioner Body of Knowledge™, DPBoK™, EMMM™, FACE™, the FACE™ logo, IT4IT™, the IT4IT™ logo, O-DEF™, O-PAS™, Open FAIR™, Open O™ logo, Open Platform 3.0™, Open Process Automation™, Open Trusted Technology Provider™, SOSA™, and The Open Group Certification logo (Open O and check™) are trademarks of The Open Group.

COBIT® is a registered trademark of ISACA and the IT Governance Institute.

GitHub™ is a trademark of GitHub, Inc.

Linux® is a registered trademark of Linus Torvalds in the US and other countries.

Netflix® is a registered trademark of Netflix, Inc.

OASIS™, SAML™, and XACML™ are trademarks of OASIS, the open standards consortium.

SABSA® is a registered trademark and Business Stack™ and Business Attribute Profile™ are trademarks of The SABSA Institute.

All other brands, company, and product names are used for identification purposes only and may be trademarks that are the sole property of their respective owners.

Acknowledgements

The Open Group gratefully acknowledges the contribution of the following people in the development of this document:

·         Principal authors John Sherwood and Stephen T. Whitlock, with additional material supplied by Tony L. Carrato, Dennis Taylor, John Carraway, and David Lohnes

·         Principal editor David Lohnes

·         Guidance and feedback from members of The Open Group Security Forum with additional suggestions from The Open Group architecture experts

·         Security Forum Director John “Jay” Spaulding

 

Referenced Documents

The following documents are referenced in this Guide.

(Please note that the links below are good at the time of writing but cannot be guaranteed for the future.)

·         A Long Day Steeped in Pomp, History, and Emotion, J. Zeleny, The New York Times, January 20, 2009; retrieved from www.nytimes.com

·         Ceremonies, Carl Ellison, CRYPTO 2005, August 16, 2005; see www.iacr.org/conferences/crypto2005/r/48.mov and http://world.std.com/~cme/Ceremonies.ppt

·         Enterprise Security Architecture: A Business-Driven Approach, John Sherwood, Andrew Clark, David Lynas, CRC Press, 2005

·         Future Crimes: Everything is Connected, Everyone is Vulnerable, and What we Can do About it, Marc Goodman, Doubleday, 2015

·         ISO 31000:2018: Risk Management – Guidelines; refer to: www.iso.org/standard/65694.html

·         ISO/IEC/IEEE 42010:2011: Systems and Software Engineering – Architecture Description; refer to: www.iso.org/standard/50508.html

·         Jericho Forum® Commandments, White Paper (W124), May 2007, published by The Open Group; refer to: www.opengroup.org/library/w124

·         NIST Special Publication 800-37, Revision 1: Guide for Applying the Risk Management Framework to Federal Information Systems: A Security Life Cycle Approach, 2014

·         NIST Special Publication 800-53, Revision 4: Security and Privacy Controls for Federal Information Systems and Organizations, 2015

·         PKI: It’s Not Dead, Just Resting, Peter Gutmann, University of Auckland, 2002; refer to: www.cs.auckland.ac.nz/~pgut001/pubs/notdead.pdf

·         The Cryptographic Mathematics of the Enigma, Dr. A.R. Miller, Center for Cryptologic History, National Security Agency, Fort George G. Meade, Maryland (2012)

·         The Cuckoo’s Egg: Tracking a Spy Through the Maze of Computer Espionage, Clifford Stoll, Doubleday, 1989

·         The Turing Bombe, Frank Carter, Bletchley Park Trust Report No. 16, January 2000

·         The TOGAF® Standard, Version 9.2, a standard of The Open Group (C182), published by The Open Group, April 2018; refer to: www.opengroup.org/library/c182

·         US National Institute for Standards and Technology (NIST) Cybersecurity Framework (CSF 15); refer to: www.nist.gov/cyberframework

The following documents provide useful background material:

·         Against the Gods: The Remarkable Story of Risk, Peter L. Bernstein, Wiley, 1998

·         Fooled by Randomness: The Hidden Role of Chance in Life and in the Markets, Nassim Nicholas Taleb, Penguin, 2007

·         Measuring and Managing Information Risk: A FAIR Approach, Jack Freund, Jack Jones, Butterworth-Heinemann, 2014

·         NIST Special Publication 800-30, Revision 1: Guide for Conducting Risk Assessments, 2012

·         NIST Special Publication 800-39: Managing Information Security Risk: Organization, Mission, and Information System View, 2011

·         The Black Swan: The Impact of the Highly Improbable, Nassim Nicholas Taleb, Penguin, 2008

·         The Failure of Risk Management: Why it’s Broken and How to Fix it, Douglas W. Hubbard, Wiley, 2009

·         The Flaw of Averages: Why we Underestimate Risk in the Face of Uncertainty, Sam L. Savage, Wiley, 2012

 


Introduction

Information or Cybersecurity Architecture, like general-purpose IT Architecture, starts with a set of principles. Principles are idealistic statements of perfection. Typically they are not totally achievable but, rather like a distant lighthouse, guide the direction of IT development activities. The TOGAF® standard describes principles in Chapter 20 as:

(Architecture) Principles are general rules and guidelines, intended to be enduring and seldom amended, that inform and support the way in which an organization sets about fulfilling its mission.

Principles can have a significant positive impact when applied to ICT architecture and design. Rather than mandating, prescribing, or micro-managing specific architectural aspects or technologies of an ICT solution, principles provide flexible guidance that can be adapted to specific circumstances.

While general adoption of these principles is advised, business affordability, design constraints, technology availability, regulatory requirements, and other restrictions will limit the extent to which they can be employed. Decisions to violate these principles, which will often degrade security, should be made only after careful thought and analysis of the change in risk that results from weakening the application of the principle. This is normal. These issues are addressed directly in Principle 11: Optimal Conflict Resolution.

With that in mind, several IT Security Architectural principles are presented. Some have been in common but informal use from the beginnings of the field of information security, some are taken from the work of the Jericho Forum® (which produced several innovative sets of principles), others have been derived from the work of The SABSA® Institute, and a few are included that derive from personal experiences. Rather than just stating these principles in a list, some explanations, discussions, illustrations, and examples are provided.

The astute reader will also notice that a number of these principles overlap or intertwine with each other to produce a sum greater than the individual parts. Many other axioms could be proposed as principles, but quality degrades as quantity increases, until we are left with a growing list of statements that amount to common sense.

These principles are ordered so that those with a business focus are listed first. These are followed by the more general principles (the “*ilities”) – those that are more architectural in nature in the first half of the list, and those that are more design-focused in the second half. So, there is a general progression of thought from Business to Architecture to Design. But all of these principles are important for the creation of IT services that will endure the complexities of the modern business environment and the ravages of increasing cyber-attacks.

One of the differentiators of Security Architecture from other kinds of IT and Enterprise Architecture is that the architecture must presume the existence of adversarial intelligence at work against the System of Interest (SOI). Adversarial thinking is required for successful Security Architecture. It is intended that collectively these principles provide the basis for strong Security Architecture, but it is incumbent on the architect to always remember their adversary and to think proactively “from the outside” looking in.

1                        Principle 1: Business Risk-Driven Security

Security Architecture shall support business goals by enabling maximum gain potential while minimizing loss potential.

Traditionally, information security has been understood as a discipline for loss prevention – as a means to mitigate certain kinds of risks.

And while this is true, it is essential for a Security Architect to keep in mind that an organization’s assets do not exist simply to be protected; they exist to create value, and in most cases leveraging an asset to create value means exposing that asset to risk.

Security professionals – including information security professionals – are of necessity often preoccupied by possible negative outcomes, while those they protect – including the business leaders who own digital assets – focus on the positive outcomes they are pursuing. To provide the best architecture, Security Architects need to see their contributions to an organization not just in terms of the negative outcomes they prevent, but in terms of the positive outcomes they enable.

The disciplines of information security are new, but the art of securing things is old, and this principle – like many of the principles that guide a good Security Architect – is rooted in realities that govern even old-fashioned physical security and the part it plays in managing risk.

The following example from the domain of physical security underscores the point:

Securing heads of state requires balance between risks and objectives.

Heads of state are uniquely valuable – and uniquely threatened – public assets. In recent decades, numerous heads of state have been targeted for, and become victims of, assassination. From John F. Kennedy, to Indira Gandhi, to Yitzhak Rabin, more than 50 sitting heads of state have been killed by extra-judicial violence since 1960, and no amount of wealth, technological advancement, or population size has made a nation immune from this threat.

Nations respond by assigning dedicated security professionals to keep their heads of state safe. These professionals differ in training and resources and in the scope and complexity of their duty, but the core responsibility is the same for all of them.

And for all of them, this responsibility requires a not-always-easy balance between keeping their charge safe on the one hand and allowing him or her to function effectively as a politician and leader on the other hand. The least risky course would be to prevent the head of state from traveling and appearing in public, to keep him or her secured in some bunker or compound far away from the uncertainty and unpredictability of public spaces and crowds.

But of course, in order to function in their role, a head of state must lead (often both a nation and a political party), and this leadership requires exposure to the public, to unvetted people in open places that are impossible to completely control. This exposure is inherently risky. An effective security team will balance risk with benefit as they advise the head of state, alternately pushing back against and acquiescing to risks based on a complete analysis that includes not just the risk, but also the justification for taking a particular risk. Heads of state ultimately make their own decisions much of the time, and they may disregard the advice of the security team (as when in 2009 Barack Obama “overruled advisers who suggested that he should stay in his car” during the inaugural parade and instead took the traditional walk down Pennsylvania Avenue),[1] but a security team that makes plans and gives advice without considering the goals and priorities of the head of state may quickly find itself out of sync with its customer and fail to become a trusted partner for planning.

Security Architecture should be driven by business risks and should be an appropriate response to those risks.

The same is true for Security Architects. If a Security Architecture does not account for what the organization is trying to accomplish with its assets, the Security Architecture has a higher chance of being marginalized and unhelpful. For the organization, security may be less important than risks to the organization’s strategic or financial objectives.

Assessing and maintaining awareness of the organization’s goals and priorities requires Security Architects to develop a business risk profile and understand the interactions between risks and the organization’s objectives and drivers. Assumptions about these interactions without researching the actual business context will lead to inadequate architectural responses.

ISO 31000:2018: Risk Management – Guidelines (see Referenced Documents) offers the following guidance about establishing the organizational context for risk management:

By establishing the context, the organization articulates its objectives, defines the external and internal parameters to be taken into account when managing risk, and sets the scope and risk criteria for the remaining process. … The internal context is the internal environment in which the organization seeks to achieve its objectives. The risk management process should be aligned with the organization’s culture, processes, structure, and strategy. Internal context is anything within the organization that can influence the way in which an organization will manage risk. It should be established because:

·         Risk management takes place in the context of the objectives of the organization

·         Objectives and criteria of a particular project, process, or activity should be considered in the light of objectives of the organization as a whole

·         Some organizations fail to recognize opportunities to achieve their strategic, project, or business objectives, and this affects ongoing organizational commitment, credibility, trust, and value

 

2                        Principle 2: Context

Assume context at your peril.

Security systems or solutions that are designed for one environment may not work effectively when transferred to a different environment.[2]

A typical systems design cycle gathers requirements that drive the system architecture, then the system design, and eventually the implementation. However, this cycle is also a transition from general concepts and capabilities to those specific to the target requirements, so part of this transition from design to implementation is the removal of components or services not essential to the specific target use-case. This will almost certainly limit the applicability of the security system to other use-cases.

When a security system is designed, or when security qualities are built into a business system, other architecture and design elements including the specific use-case for the system, the sensitivity of its data, requirements for connection capability, the typical threats that may impact the systems, and other things related to the system architecture are taken into consideration. As part of the design process, all of these are fed through a risk management process to determine the necessary security capabilities.

This is not an argument against reuse, and there are many benefits to reusable component-based architectures. However, if the security system is reused for a different use-case, sometimes as a way to save time and effort, then a new risk analysis needs to be performed against the differences between the two use-cases. Otherwise, the existing security capability may not be enough to counter any changes in resource value or attack surface in the new use-case. Or it may be overkill, wasting resources. One suggestion is that security system architectures and designs include a description, as part of their terms and conditions, of the risk environment in which the security system was intended to operate.

An example of this is the use, in the United States, of state drivers’ licenses for identification at US airports. Drivers’ licenses were and still are issued as evidence that the holder has passed the driving test for the state issuing the license. But they were not issued as strong proof of the driver’s identity, and furthermore identity requirements still vary significantly between the different US states. The US Department of Homeland Security (DHS) is rectifying this by issuing minimum identity proofing and vetting requirements for states to apply to their drivers’ licensing programs if the licenses are to be accepted as airline passenger identification. DHS is also issuing higher-quality identity cards through several US Trusted Traveler programs.

Drivers’ license requirements are excessive for other uses, such as providing proof-of-age for purchasing tobacco or alcohol products. While states have (again differing) minimum age requirements for such purchases, there is no requirement to pass a driving test in order to make the purchase.

A common mistake in systems design is to confuse the access control processes of identification, authentication, and authorization. These may be precisely defined as:

·         Identification – the presentation of an identifier so that the system can recognize and distinguish the presenter from other principals

·         Authentication – the exchange of information in order to verify the claimed identity of a principal

·         Authorization – the granting of rights, including access, to a principal, by the proper authority

A principal is defined as: “an entity whose identity can be authenticated”. For systems where the resources are of low sensitivity or where the identities of end users are tightly controlled, authentication may also serve as a crude form of authorization by allowing access to any user who can authenticate, but this removes any access control granularity.
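The separation of these three processes can be made concrete with a minimal sketch. This is an illustrative example only; the principal registry, password scheme, and rights model below are hypothetical and not drawn from this Guide:

```python
# Illustrative sketch: identification, authentication, and authorization
# as three distinct checks. All names and data here are hypothetical.
import hashlib
import hmac

# A registry of principals: identifier -> (password hash, granted rights)
PRINCIPALS = {
    "alice": (hashlib.sha256(b"correct horse").hexdigest(), {"read", "write"}),
    "bob":   (hashlib.sha256(b"battery staple").hexdigest(), {"read"}),
}

def identify(identifier: str) -> bool:
    """Identification: recognize and distinguish the presented identifier."""
    return identifier in PRINCIPALS

def authenticate(identifier: str, password: str) -> bool:
    """Authentication: verify the claimed identity of the principal."""
    if not identify(identifier):
        return False
    stored_hash, _ = PRINCIPALS[identifier]
    candidate = hashlib.sha256(password.encode()).hexdigest()
    return hmac.compare_digest(stored_hash, candidate)

def authorize(identifier: str, right: str) -> bool:
    """Authorization: check that the proper authority granted this right."""
    _, rights = PRINCIPALS.get(identifier, (None, set()))
    return right in rights

# A principal may authenticate successfully yet still lack authorization:
assert authenticate("bob", "battery staple")
assert not authorize("bob", "write")
```

Note how conflating the second and third checks – treating any authenticated user as authorized – is exactly the loss of access control granularity described above.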

Other systems may be sensitive enough that they require specific authorizations for services or data access. A challenge arises when dissimilar systems are connected together either as part of automating existing capability or as collaboration requirements when two different enterprises are parts of the same supply chain.

A specific class of this problem is created when Industrial Control Systems (ICS) are connected to the Internet. These systems historically have been isolated, closed systems, running proprietary protocols with little security other than their isolation and lack of interoperability. Connecting them to the Internet exposes them to malware and deliberate attacks from the entire planet, yet many of these systems have been connected without conducting risk assessments, upgrading protection controls, or otherwise re-examining their susceptibility to attacks and other associated threats.

There are two distinct use-cases for identification/authentication/authorization during the lifecycle of managing the access of an entity (or principal – a human user or a machine device) to other services and resources:

·         Registration – the process of introducing a principal to a closed system community for the first time

This requires some original proof of identity and authorization to join the community under the security policies of the domain registration authority. If this process is weak, then it may be possible for fake principals to be registered, and all subsequent authentication checks and access controls are useless. The strength of the registration process will determine the degree of trust that can be placed in the registered principal. See Principle 5: Trust for a discussion of trust.

·         Real-time access – the process of testing the identity, authenticity, and authorizations of an already registered principal during real-time system access

This real-time access decision-making may involve some context-related rules such as day of week, time of day, point of access, and type of access requested. Such context rules are set up at the time of registration and form part of the privilege profile associated with the principal.

The role of the registration authority may be entirely separated from that of the access control authority. For example, an employee is registered as such by the HR department, but access to application functionality is the responsibility of the business unit that owns the application. The registration authority must maintain a process to handle “joiners, movers, and leavers” to ensure that registrations are up-to-date; otherwise, real-time access may be granted to principals not so entitled. This is an area where the Security Architect must involve multiple organizational departments in the design and operation of the identity and access management systems and processes. While complex technical solutions may be required, the overall process architecture is the most critical to success.

Another layer of context-based access control may be added by the use of role-based access rules (such as job function) or attribute-based access rules (such as up-to-date skills training record), or a combination of both.
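A layered decision of this kind might be combined as follows. This is a hedged sketch, not a reference implementation: the specific rules (business hours, job function, training record) and all names are invented for illustration, and real deployments would typically express such policies in a standard language such as XACML:

```python
# Hypothetical sketch: combining context-, role-, and attribute-based
# access rules, with every layer required to grant (deny-overrides).
from datetime import datetime

def context_rule(request_time: datetime) -> bool:
    """Context rule: access permitted only during business hours."""
    return 9 <= request_time.hour < 17

def role_rule(principal: dict, required_role: str) -> bool:
    """Role-based rule: the principal's job function must match."""
    return required_role in principal.get("roles", [])

def attribute_rule(principal: dict) -> bool:
    """Attribute-based rule: skills training must be up-to-date."""
    return principal.get("training_current", False)

def access_decision(principal: dict, required_role: str,
                    request_time: datetime) -> bool:
    """Grant access only when all layers of rules agree."""
    return (context_rule(request_time)
            and role_rule(principal, required_role)
            and attribute_rule(principal))

operator = {"roles": ["plant_operator"], "training_current": True}
assert access_decision(operator, "plant_operator", datetime(2018, 6, 4, 10, 30))
assert not access_decision(operator, "plant_operator", datetime(2018, 6, 4, 22, 0))
```

The deny-overrides combination shown here is only one possible combining strategy; the appropriate choice depends on the risk profile established for the system.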

A registration authority will most often be global in its remit, although local registration authorities may be used to carry out the registration process operations closest to the principals being registered. Real-time access control should be carried out closest to the services and resources being protected and business-enabled. “Think global and act local” is a mantra often used.

 

3                        Principle 3: Scope

The Security Architecture shall be confined to the specified System of Interest (SOI).

It is important to have a clear definition of the scope of the Security Architecture. In this respect, the concept of an SOI is useful (as defined in ISO/IEC/IEEE 42010:2011).

SOI implies that there is a boundary, inside which are the concerns of the various stakeholders. Beyond this boundary lies the system environment. However, the boundary should not be assumed to be a “secure perimeter”. The interfaces between the SOI and its environment may have many different characteristics depending upon the business context of the SOI.

Examples of an SOI are: enterprise, extended enterprise, business application system, enterprise infrastructure, solution, enterprise service bus, identity and access management service, etc.

To be useful, the SOI description should include: the elements that comprise the SOI, the boundary, and the interfaces to the environment. The SOI elements will include people, processes, and technology.

As a general guide, opportunities and threats originate in the system environment and strengths and vulnerabilities exist inside the SOI. Figure 1 shows this in graphical form in a SABSA domain diagram. The SOI is a sub-domain of the system environment super-domain.


Figure 1: SABSA Domain Diagram

The important thing to recognize about a domain diagram such as this one is that the super-domain is pervasive across the sub-domain. All the elements in the sub-domain are also elements of the super-domain (as in set theory). Thus, the “insider threat” means a threat originating in that part of the system environment that is congruent with the SOI sub-domain.
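The set-theoretic relationship can be stated directly. The following sketch uses arbitrary, hypothetical element names purely to illustrate the containment property:

```python
# The SOI sub-domain is a subset of the system environment super-domain,
# so every element of the SOI is also an element of the environment.
environment = {"customer", "supplier", "attacker", "employee", "server"}
soi = {"employee", "server"}  # elements inside the System of Interest

assert soi <= environment  # the sub-domain is contained in the super-domain

# The "insider threat" originates in the part of the system environment
# that is congruent with the SOI sub-domain:
insider_threat_sources = environment & soi
assert insider_threat_sources == soi
```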

4                        Principle 4: Intelligence

Security systems shall leverage intelligence to govern their responses.

Enterprises face threats from increasingly prolific and versatile adversaries. Countering those threats requires knowledge of the adversary’s intentions, capabilities, means of contacting the enterprise, and overall threat capability. Intelligence about the motives of the adversary may help to identify the most likely targets for an attack and the type of attack intended. This helps to prioritize the most important areas in our systems landscape where cyber defense will be effective. We need to understand what makes us a juicy target and where the sweet spots are from the viewpoint of the adversary.

Types of adversary might include competitors, spoilers, anti-social individuals, foreign nation states, terrorists, organized crime, pressure groups with “ethical” agendas, ex-employees and disaffected current employees, and any other party that might bear a grudge or see an advantage in bringing the enterprise down. The business should compile a realistic list of the types and identities of those that might be relevant to its business objectives.

The Security Architecture needs to foster the development of security systems and services that can respond to an understanding of these threats. This is best accomplished by a methodical analysis of the potential threat scenarios, based on the threat actors, who they are, what their objectives are, and why we appear on their target radar. One such methodology is described in the SABSA book Enterprise Security Architecture.[3] The method is briefly summarized by the diagram in Figure 2.

Figure 2: SABSA Threat Scenario Modeling

By analyzing the threat actor’s capabilities, motivation, and opportunities for launching an attack, a series of expected threat scenarios can be constructed.

Beyond that there is a need to first identify the potential threat actors. Most enterprises will have a good view of who their adversaries are, but from time to time new actors come into play. Monitoring and analyzing the sentiments expressed in social media and news channels (otherwise known as “opinion mining”) can reveal when new opponents are emerging. So it follows that intelligence gathering should encompass both known adversaries and the general background noise from which new information about sources of threat can be extracted.

Threat intelligence may be used to plan and scale response strategies and tactics, processes, emergency communications, and technical defenses. Threat intelligence is a sub-set of the more general “business intelligence” which covers the use of tools and best practices to analyze open source information to improve and optimize business decisions and performance. Business intelligence looks for both opportunities and threats.

Use of intelligence in the architectural planning stages will guide the creation of IT, the Internet of Things (IoT), and other business systems with a minimal attack surface, thus reducing the potential target space.

Intelligence may also be used during an attack to redirect existing resources to where they will be most effective. Use of intelligence during an attack can help in automating defenses and shortening the attack period, thus reducing the overall loss.

 

5                        Principle 5: Trust

Security shall provide trusted systems that accurately model the nature, types, levels, and complexity of trust that exists in the business entity relationships.

Trust is a characteristic of human relationships. However, we may imbue non-human objects with trust. When we do this we are really saying that we trust or are trusted by the humans that create and operate these objects. Thus, a trusted system is one in which we trust the people that are involved in the system’s lifecycle processes – requirements definition, conception, design, construction, operation, and retirement.

Whilst we often talk of “trusted” versus “untrusted”, the situation is never black and white. Trust is never binary; rather, it is a continuum of grey scale, rarely either extreme. The laws of the universe tell us that there is uncertainty in the outcome of events at every level (otherwise known as risk – see ISO 31000:2018 and Principle 1: Business Risk-Driven Security). There is always a finite risk that trust will be betrayed, whether deliberately or accidentally.

The level of trust in a relationship is based on accumulated experience of the other person or organization (the principal). Body language, facial expression, attitude, reputation, dress code, context, and many other factors influence our risk assessment of whether or not to “trust” someone and at what level that trust should be invested. As the relationship matures we modify (upwards or downwards) the level and type of trust that we hold for that other principal. Unless the social signals indicate otherwise, humans usually begin a new relationship with a default setting of some level of trust, taking people at face value. The abuse of this trait is what makes social engineering easy for the determined and skillful confidence trickster.

As an example of establishing trust, consider going out one evening with the intention of social interaction and meeting new friends. You visit a local bar and strike up conversations. The choice of venue is one of the first factors to consider – what types of people inhabit this location? Then if you get past the visual tests and decide to start a conversation, the progress is slow and the initial level of trust is low. It may take several meetings to establish any sort of friendship, seeking out common interests and moral values. Among the first topics to be explored will be the exchange of names (identification) and background checks; for example, Where are you from? What work do you do? You slowly build a picture of the other party on which your trust level will be based. You also make a risk assessment as to whether you are being told the truth.

Next time you go out you attend a house party – the birthday party of a close friend or relative. You meet new people. You introduce yourselves by exchange of names (identification) but almost certainly an early question will be: How do you know the host (authentication)? Answers will vary from “Oh, we were at college together”, through “I’m his sister-in-law”, to possibly “I don’t – I just drifted in through the front door with some other people”. Depending on the answer you get, a quantum leap in trust may be established. In this example the host is being treated as a “trusted third party”, allowing the relationship to progress more quickly than the previous encounter in the downtown bar. Exchange of credentials “certified” by the host (authentication) allows a new level of trust to develop.

Trust is also very granular in type. Trusting a person to perform surgery on your brain is very different from trusting them for almost any other reason. I may trust you enough to lend you 100 dollars until the end of the week, but if you want to take my daughter out to dinner my response may be very different.

Trust is almost infinitely variable in both level and type. Nevertheless, we have some tools for analyzing trust into its component parts. The following is a short summary of the method used in the SABSA framework for trust modeling and analysis. Alice and Bob are two principals who will need to trust one another for some business purpose. Carol is a third party known to and trusted by both Alice and Bob.[4]

·         Simple one-way trust

Alice trusts Bob for one thing only. Bob may be unaware of this trust. For example, I may trust the BBC News website as an authentic and current source of news. The BBC does not need to know this.

·         Two-way trust

Alice trusts Bob for a number of things, and Bob also trusts Alice for a number of things. The trust may or may not be symmetrical; in practice it is usually complex and asymmetrical. Bob trusts Alice for her DIY skills, whereas Alice trusts Bob for his cookery skills, among other things. “Trusting” and being “trusted” are not mirror images.

·         Transitive trust

In this case mutual trust of Carol (the trusted third party) allows both Alice and Bob to trust one another for certain things to which Carol can attest (but not for other things beyond Carol’s experience of the parties).

Even the most complex trust relationship can be analyzed top-down into a series of simple one-way trust relationships – how does Alice trust Bob and how does Bob trust Alice? And how do they both trust Carol? By performing this analysis rigorously, any business relationship can be broken down into its individual simple one-way trust components. The Security Architect must first understand the details of this business trust model in order to design and build a technical system that models the human trust relationships accurately. Cutting and pasting off-the-peg trust models such as that defined in Security Assertion Markup Language (SAML™) is unlikely to reflect the true business model, although such standards can be useful in guiding the path to accurate trust modeling.
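As an illustrative sketch only (this is not part of the SABSA method, and the names and the “vouching” scope are invented for the example), the decomposition into simple one-way trust relationships can be modeled as directed, scoped relations, with transitive trust derived through a trusted third party:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Trust:
    """One simple one-way trust relationship: truster trusts trustee
    for a single, named scope."""
    truster: str
    trustee: str
    scope: str


def trusts(relations: set, truster: str, trustee: str, scope: str) -> bool:
    """Direct one-way trust check."""
    return Trust(truster, trustee, scope) in relations


def transitive_trust(relations: set, truster: str, trustee: str, scope: str) -> bool:
    """Trust via a trusted third party: the truster trusts someone to
    vouch, and that voucher trusts the trustee for the scope in question.
    The voucher cannot confer trust beyond her own experience."""
    if trusts(relations, truster, trustee, scope):
        return True
    vouchers = {t.trustee for t in relations
                if t.truster == truster and t.scope == "vouching"}
    return any(trusts(relations, v, trustee, scope) for v in vouchers)


# Alice trusts Carol to vouch; Carol trusts Bob's cookery.
rels = {
    Trust("Alice", "Carol", "vouching"),
    Trust("Carol", "Bob", "cookery"),
}
```

Running the analysis on this model, Alice does not trust Bob directly, but transitive trust for cookery holds through Carol, while a scope outside Carol’s experience (say, surgery) does not.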

 

6                        Principle 6: Holistic Approach

Security requirements shall be integrated with other requirements, both functional and non-functional.

Security requirements are often described as Non-Functional Requirements (NFRs) and should not be viewed in isolation from functional requirements or other NFRs. Security Architecture only works effectively if all requirements are considered as part of the holistic risk profile of the SOI.

Architecture is often seen as a series of views from a series of viewpoints. These views are often described as architecture domains. Examples can include: Business Architecture domain, Application Architecture domain, Information Architecture domain, Data Architecture domain, Service Management Architecture domain, and Technology Architecture domain.

Views and viewpoints are an important concept in architecture (see ISO/IEC/IEEE 42010:2011). The model is analogous to a valley surrounded by hills, with the river in the lowest part of the valley. A stakeholder will have a different view depending on where they are standing – the viewpoint. They all see the same valley, but the view from the bridge is very different to the view from the northern hill, which is different again to the view from the southern hill or the col on the western end of the valley.

The Security Architecture domain is another view from another viewpoint, but is very different inasmuch as it is a cross-cutting domain that must address security and risk management across all other domains in a holistic way. Risk cannot be separated into architecture domain silos that do not interact. The Security Architect must see the valley from all viewpoints simultaneously, a kind of holographic view that can be rotated and spun.

This need for holistic Security Architecture also implies that the development of Security Architecture is a multi-disciplinary activity, requiring the input of expertise and knowledge from specialists in each of the architecture domains to be covered.

Also implied is the need for a design authority in the form of a Chief Security Architect (or similar designation) whose job is to ensure that all of the various inputs are integrated into a holistic design that gives appropriate weight to every requirement and its design response. See also Principle 7: Simplicity, Principle 11: Optimal Conflict Resolution, and Principle 12: Communication Clarity.

 

7                        Principle 7: Simplicity

Systems and services shall be as simple as possible while retaining functionality.

Complexity is the enemy of security and must be managed into simplified sub-structures whilst maintaining the holistic approach (see Principle 6: Holistic Approach). Security Architecture may benefit from a Service-Oriented Architecture (SOA) approach in which we see “Everything as a Service” (EaaS). Service performance measurement is essential to meeting top-level business performance targets.

One of the main goals of architecture of any kind is to manage complexity. The highly complex SOI has to be broken down through top-down decomposition, starting with the highest-level business goals and creating ever-simpler layered presentations of the SOI that can be addressed layer by layer.

In particular, highly complex systems have a tendency to exhibit emergent properties. These are unplanned, unexpected, and often unwanted system behaviors that arise because of complex system component interactions that were not foreseen by the system designers. Simple examples of emergent properties include: deadlock when two or more processes compete for the same system resources; and traffic congestion in networks when capacity is exceeded. Clifford Stoll described one of the earliest cyber attacks through exploitation of an emergent system property in his book “The Cuckoo’s Egg: Tracking a Spy Through the Maze of Computer Espionage” (see Referenced Documents). In this case study, the properly designed workings of the UNIX® operating system were used against it to introduce malware at root privilege level.

System security vulnerabilities arise mainly from two sources: errors in design and emergent properties. The two are not the same and should not be confused. Emergence is a systems engineering phenomenon that cannot be designed out at the outset. Complexity itself is what causes emergence.

Many of the exploits used to attack cyber systems are the result of an emergent property discovered by the SOI opponents and subverted to create an attack vector aimed at bringing that emergent property into play in such a way as to compromise the system design.

The way to simplify complexity is most often through top-down decomposition into a series of layers. This does not guarantee that emergent properties can be eliminated, but it provides a means for focused inspection of the workings of the SOI at increasing levels of detail and facilitates security by design. Figure 3 shows two similar decompositions.


Figure 3: IT and SABSA Business Stacks Compared

On the top is a fairly conventional representation of the IT stack. Below it is a similar view from a business perspective – the SABSA Business Stack™. In both cases the top-level assets (information in the IT stack and the business value chain in the business stack) are decomposed into a layered architecture model.

Applying a service-oriented approach to these layered stacks allows an approach of treating “Everything as a Service” (EaaS). Such models are supply-demand models, in which the layer above demands services from the layer below, and the lower layer delivers (supplies) those services upwards. The layer interfaces are well defined and characterized by Service-Level Agreements (SLAs) with performance measurements and targets defining the service levels.

Figure 4: Supply and Demand Interfaces

Each layer should be independently risk-managed within the context of that layer so as to fulfill the SLA. The layers should be independently architected so that a change in technology (for example) can still provide exactly the same service upwards from that layer. Each layer acts as an intermediary between the layers above and below it. Layer disintermediation should be forbidden, meaning no layer should attempt to skip downwards to obtain services from more distant layers, or services that are not exposed at the service interface.

Figure 5: Layer Disintermediation Not Allowed

Disintermediation is one way that emergence can occur because the integrity of the layered control framework is compromised. For example, an application using a network has no way to tell whether or not certain network security services are switched on or off. Reliance on the security of a lower layer is very dangerous, unless it is a service that can be delivered through the layer interface and objectively measured under the terms of the SLA. Network performance for latency, speed, and accuracy of delivery is visible to an application. Confidential transport of Protocol Data Units (PDUs) is not.
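The layer-interface discipline above can be sketched in code. This is a minimal, hypothetical illustration (the layer names and services are invented): each layer exposes only its declared services, and a request for anything not at the interface is refused rather than passed through to a more distant layer.

```python
class Layer:
    """A layer exposes only its declared services; callers cannot reach
    past it to more distant layers (no disintermediation)."""

    def __init__(self, name, services):
        self.name = name
        self._services = services  # service name -> callable (private)

    def request(self, service, *args):
        if service not in self._services:
            raise PermissionError(
                f"{service!r} is not exposed at the {self.name} interface")
        return self._services[service](*args)


# Hypothetical two-layer stack: the application layer demands a service
# from the network layer through that layer's interface only.
network = Layer("network", {"send": lambda pdu: f"delivered:{pdu}"})
application = Layer("application",
                    {"submit": lambda msg: network.request("send", msg)})
```

A caller at the application interface can submit a message, but attempting to invoke the network’s "send" service directly at the application interface fails, preserving the integrity of the layered control framework.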

This approach requires a measurement approach to be used in which the performance of each layer can be calibrated against targets. In SABSA, this is achieved through the Business Attribute Profile™ (BAP). Attributes at each layer are specified in terms of the characteristics of that layer, but at each layer there is an inheritance downwards so that each layer BAP contributes to the performance of the BAP of the layer above.

Each attribute in a BAP has a measurement approach specified, a specific metric, and a performance target. Everywhere in the stack there is a measurable profile of how that layer should work and a reporting structure on how it is actually working.
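One BAP entry might be represented as follows. This is an illustrative sketch only; the field names and the example attribute are not SABSA-prescribed, but show the three elements named above: a measurement approach, a specific metric, and a performance target.

```python
from dataclasses import dataclass


@dataclass
class BusinessAttribute:
    """One entry in a layer's Business Attribute Profile (illustrative
    fields): how the attribute is measured, the metric used, and the
    performance target against which the layer is calibrated."""
    name: str
    measurement_approach: str
    metric: str
    target: float

    def meets_target(self, observed: float) -> bool:
        # Reporting: compare actual performance against the target.
        return observed >= self.target


available = BusinessAttribute(
    name="available",
    measurement_approach="continuous uptime monitoring",
    metric="percent uptime per month",
    target=99.9,
)
```

An observed monthly uptime of 99.95 percent meets this layer’s target; 99.0 percent does not, and would be surfaced through the reporting structure.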

The Security Architect should approach architecture as adding value, not just as a cost to the business. This is why the SABSA Business Stack model (see Figure 3) has the business value chain as its top layer, with all other parts of the stack inheriting these value creation characteristics. If a lower-layer activity does not ultimately contribute value to the business value chain, then it has no value in the stack and should not be performed. This provides two-way traceability up and down the stack. Justification for everything done throughout the stack (Why are we doing this? Does it derive from the highest level?) and completeness of solutions to meet requirements (Can we trace down to find implementations that fulfill all the business value creation requirements?) are explicit in this approach.

Complexity makes it difficult to analyze risk and detect threats. Security mechanisms themselves, where possible, should not add additional complexity but should be simple, scalable, and easy to manage.

Complexity also increases the chance of error and every error is a potential security flaw. Today’s operating systems and sophisticated applications contain millions of lines of code. They will never be completely bug-free and therefore never totally secure. Since every bug is potentially a security vulnerability, security systems must take this into account and protect these resources in spite of the potential vulnerabilities. Security systems themselves should be as simple as possible which means the amount of trusted software and hardware should also be as limited as possible. An example of a data security system that fails the simplicity test is the use of a typical Digital Rights Management (DRM) service, as shown in Figure 6.

Figure 6: Digital Rights Management Attack Surface

A secret key-based encryption system has a relatively small attack surface, comprising just the computers used to process the encryption algorithms and the peripherals used by the end users to send messages or data through the system. However, secret key cryptography has its own practical limitations, such as poor scalability and the requirement to exchange encryption keys out of band in a secure fashion.
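The out-of-band key requirement can be seen in a small sketch. HMAC (a secret-key message authentication code) stands in for encryption here because the Python standard library ships no block cipher; the point is the same: both parties must already hold an identical secret key, distributed securely outside the channel.

```python
import hashlib
import hmac
import secrets

# The shared secret must reach both parties out of band - the practical
# limitation noted above. Here it is simply generated locally.
shared_key = secrets.token_bytes(32)


def tag(message: bytes, key: bytes) -> bytes:
    """Compute an authentication tag over the message with the shared key."""
    return hmac.new(key, message, hashlib.sha256).digest()


def verify(message: bytes, mac: bytes, key: bytes) -> bool:
    """Constant-time verification; only a holder of the same key succeeds."""
    return hmac.compare_digest(tag(message, key), mac)
```

A party holding a different key cannot verify (or forge) tags, which is the security property; distributing that key at scale is the weakness that public key technology addresses.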

Public key technology solves the key exchange and scaling challenges of secret key encryption but trades those weaknesses for an identity management problem. That is, how does an end user securely identify the public key of the person with whom the end user wants to communicate? This is typically solved with a collection of services usually grouped under the heading of Public Key Infrastructure (PKI). A PKI system includes many more servers and complex topologies that result in a greatly expanded attack surface. Any mistake in setting up the PKI or vulnerability in the server software provides an additional attack path against the data being protected.

A typical DRM system leverages a PKI for authentication and adds additional servers and services to process access requests, make corresponding access control decisions, and protect resources from unauthorized access. This further increases the security system’s complexity and adds even more attack vectors.

Ultimately in any PKI implementation there is a top-level root private signature key and associated public certificate that must be trusted throughout the lower layers of the system. Trust is an architectural issue in itself, explored earlier in this Guide (see Principle 5: Trust). However, trust is not created by technology but by the design and operations of the system. In other words, do we trust those who run the service to do it securely? People and processes are the key elements for building trust, not technology.

Another risk of building complex systems is the ease with which organizations become attached to them and take them for granted, and in the case of security systems trust them too much, forgetting the business or security problem the system was intended to solve. Keeping with the crypto example, here is Peter Gutmann’s[5] assessment of the Online Certificate Status Protocol (OCSP), currently in wide use in PKIs today:

“Although OCSP has many advantages over CRLs, its major shortcoming is that instead of providing a simple yes/no response to a validity query, it instead uses multiple, non-orthogonal certificate status values because it can’t provide a truly definitive answer. The possible responses to a query are “not-revoked” (confusingly labeled “good” in the OCSP specification), “revoked”, and “unknown”, where “not revoked” doesn’t necessarily mean “OK” and “unknown” could mean anything from “this certificate was never issued” to “it was issued but I couldn’t find a CRL for it”.

This leads to the peculiar situation of a mechanism labeled an online certificate status protocol that, if asked “Is this a valid certificate?” and fed a freshly-issued certificate can’t say yes, and if fed an Excel spreadsheet or an MPEG of a cat, can’t say no. Contrast this with another online status protocol in use for the last decade or so, credit card authorization, for which the returned status information is a straightforward “Authorized” or “Declined” (with an optional side order of reasons as to why it was declined).”

So organizations are left to assume that the PKI trust mechanisms work in ways they may not and assume that the trust is there in order to continue doing business.
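A cautious relying party has to account for these non-orthogonal answers explicitly. The following is an illustrative policy sketch (not the OCSP wire protocol itself): even a “good” response only means “not known to be revoked”, so it justifies continuing the remaining validity checks, never outright acceptance.

```python
from enum import Enum


class OcspStatus(Enum):
    GOOD = "good"        # means "not revoked", not "valid"
    REVOKED = "revoked"
    UNKNOWN = "unknown"  # never issued? no revocation data found? can't tell


def relying_party_decision(status: OcspStatus) -> str:
    """Map the three OCSP answers to a cautious local policy."""
    if status is OcspStatus.GOOD:
        # "Not revoked" still requires signature chain and expiry checks.
        return "continue other validity checks"
    # Treat both REVOKED and the ambiguous UNKNOWN as rejection.
    return "reject"
```

Treating “unknown” the same as “revoked” is the conservative choice; systems that quietly treat it as “good” are assuming trust that the protocol does not actually provide.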

Stepping away from PKI for a moment, there is an even broader area of security management where implicit trust is assumed – enrollment or registration of people or entities as authorized members of some business community. If a fake entity or person can become registered as a legitimate party, then all other security controls from that point on are more or less broken. The security and integrity of the enrollment process is critical to the end-to-end security of the system, and has little to do with technology. Social engineering and identity theft are two examples of exploiting the implicit trust that humans place in one another as a default setting for making society work. Once the fake party is recognized as genuine, they are afforded all the privileges associated with an authorized party. So, whilst seeking to make enrollment a simple process, over-simplification can lead to opportunities for abuse.

Einstein is reputed to have said “Everything should be as simple as possible, but not simpler”, and this is true of security systems. Sometimes, as in the case of PKI, there are few better alternatives. But this should neither stop the evolution of security services nor remove the responsibility of the architect for selecting and configuring those services to present the smallest attack surface possible and to securely minimize the overall security system complexity.

Jericho Forum Commandment #2 states it this way: “Security mechanisms must be pervasive, simple, scalable, and easy to manage”, and promotes the concept that this can be achievable by breaking down a simplex security mechanism into standard, simple building blocks that can be used as components of scalable security mechanisms.

 

8                        Principle 8: Reuse

Where feasible, reuse trusted system development practices and system components.

The IT world is one of rapid change as new technologies emerge that enable new business models while enhancing old ones. Accompanying those changes are threats that evolve in scope, magnitude, and complexity. Security solutions need to be innovative and agile enough to counter those threats.

However, the Security Architect should not reinvent the wheel, as the saying goes, but rather start with proven constructs, building on solid foundations where possible. This applies to the development process, architectural conceptions, design choices, and technology implementations.

Regardless of specific context, Security Architects are engaged with problems that are common to all organizations. An architect should never start from scratch; it is always more efficient to begin with common frameworks and reference architectures and tailor them for the specific context.

Organizational architectural disciplines, like Enterprise Architecture and Security Architecture, are relatively young, but they are not immature. There is a wealth of well-developed and rigorous resources available for a Security Architect to use, and they should use them. There are also many mature components that are applicable to reuse.

When reusing architectural building blocks or system technologies, the architect should not simply cut and paste them from previous systems. See Principle 2: Context and Principle 11: Optimal Conflict Resolution for additional insights that constrain reuse. During reuse, the architect should also consider the specific system’s security requirements. Being specifically and contextually risk-driven is essential to the development of holistic Security Architecture and to resolving conflicts and optimizing solutions.

The following is a notional, non-exhaustive list of categories for reuse, with examples. Reusing these components enables the architect to leverage the time, expertise, and negotiation that have gone into their creation. This does not mean that they will not continue to evolve as business models and threats change, but that evolution is usually managed by a formal organization with significant experience and cross-organizational stakeholders.

Framework Examples

The US NIST Cybersecurity Framework (CSF) is the product of over 150 RFI responses and many stakeholder meetings. Although developed in the US, the CSF has had both international input and international adoption. One distinguishing factor is the emphasis on applying risk management to the various framework sections. The CSF also has mappings to many control-focused frameworks including the ISO 27000 family,[6] COBIT®, ISA, and NIST SP 800-53.

The ISO 27000 family is a set of controls and processes to organize and monitor enterprise security functions. It evolved from BS 7799,[7] which is based on original work done by Royal Dutch Shell. ISO 27000 has significant adoption, accompanied by readily available training material and auditing services.

ISO 31000:2018 defines a cyclic risk management process that focuses on identifying, evaluating, and treating risks. It is complemented by the NIST Risk Management Framework (documented in NIST SP 800-37), another cyclic risk management process that focuses on the effectiveness of controls in countering risk, or rather threats that affect risk.

The SABSA Balanced Risk Model provides an organized structure that defines risk-related components. It is balanced because it looks at both the potential gains and the potential losses from a business decision or activity. SABSA does not itself contain control libraries, but provides a process by which controls for other frameworks can be selected for their suitability to meet the business risk profile. It is a framework for integrating those other more technology-focused standards.

The Open Group Open FAIR™ method (based on FAIR or Factor Analysis of Information Risk) includes a taxonomy that precisely categorizes the components that determine the potential magnitude and potential frequency of future loss events. This precision allows for quantitative loss calculations.
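The quantitative idea behind Open FAIR can be sketched as follows. This is a heavily simplified, single-point illustration, not the actual Open FAIR method, which works with calibrated ranges and distributions rather than point values:

```python
def annualized_loss_exposure(loss_event_frequency: float,
                             probable_loss_magnitude: float) -> float:
    """Simplified single-point sketch of the Open FAIR idea:
    expected annual loss = loss event frequency (events per year)
    multiplied by probable loss magnitude per event."""
    return loss_event_frequency * probable_loss_magnitude


# Illustrative figures: an event expected once every two years,
# costing 200,000 per occurrence.
exposure = annualized_loss_exposure(0.5, 200_000.0)
```

Even this crude calculation shows why precise categorization matters: halving either the frequency or the magnitude halves the exposure, so controls can be compared by which factor they address.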

Protocol and API Examples

Note:      While many of these examples evolve as threats evolve, their adoption provides at least two security advantages. First, they are designed, created, and collaboratively evaluated by seasoned security experts in an international environment. And second, they provide some measure of interoperability. Interoperability is important to system security in that it allows cooperating environments to use common security protocols to protect information and other resources and it facilitates system and component upgrades or security patches.

APIs are typically specific to programming languages, operating systems, and network environments. Where possible the architect or developer should use provided, standard APIs. Some standards-based security-specific services, protocols, and APIs are listed below.

WiFi Protected Access (WPA), developed by the WiFi Alliance, provides security for wireless connections.

Transport Layer Security (TLS), developed by the IETF, provides network security that protects applications that use the TCP protocol suite. Lower in the network stack, IP Security (IPSec) provides IP layer security functions.

Domain Name System Security Extensions (DNSSEC) adds security to the DNS protocol by enabling verification of the mapping between user-friendly DNS names and IP addresses.

Security Assertion Markup Language (SAML), developed by OASIS™, is a set of protocols, message bindings, and profiles that provide authentication information.

eXtensible Access Control Markup Language (XACML™), also developed by OASIS, complements SAML by providing authorization information. The use of XACML also encourages the separation of access control decisions as described in Principle 19: Access Control.

Kerberos, originally developed as part of MIT’s Project Athena and now an IETF standard, provides secure authentication capability.

Other security API examples are Open Authorization (OAuth), Open Web Application Security Project (OWASP), and the Generic Security Service Application Program Interface (GSS-API).
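As a small illustration of using a provided, standards-based API rather than inventing one, the Python standard library’s ssl module (an implementation of the IETF TLS protocol listed above) creates a client context with certificate verification and hostname checking enabled by default:

```python
import ssl

# A default client-side TLS context: peer certificate verification and
# hostname checking are on, and obsolete protocol versions are disabled
# by the library's modern defaults.
context = ssl.create_default_context()
```

Using such a context to wrap a socket gives an application TLS protection without the application author making (and possibly getting wrong) low-level protocol choices, which is exactly the interoperability and reuse benefit described in the Note above.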

Cryptographic Algorithm Examples

Note:      The development of cryptographic algorithms and protocols is a highly specialized skill, and vetting those algorithms takes significant resources. Even well-analyzed algorithms have been discovered to contain security flaws. This is also an area prone to fraudulent sales efforts promoting new, untested algorithms; these should be avoided.

A significant amount of research and development of cryptographic algorithms resides in various national intelligence agencies, and until the past couple of decades there was little public involvement in their development. More recently, NIST, security product-focused companies, and individual researchers have released high-quality cryptographic algorithms. These include:

·         Message digest algorithms:

  US NIST SHA-2 (Secure Hash Algorithm) family

  ISO/IEC 10118-3:2004 (Whirlpool)

  Lecture Notes in Computer Science (LNCS) 1039, 1996, RIPEMD

·         Block ciphers:

  US NIST Advanced Encryption Standard (AES)

  IETF RFC 2612: The CAST-256 Encryption Algorithm

  Twofish Block Cipher, Bruce Schneier

  RSA Rivest Cipher 6 (RC6)

  IBM MARS

·         Stream ciphers:

  US NIST Advanced Encryption Standard (AES) Counter Mode

  Salsa20, Dan Bernstein

  RSA Rivest Cipher 4 (RC4)

  ETSI Digital Enhanced Cordless Telecommunications (DECT) Standard Cipher (DSC)

·         Public key algorithms:

  IEEE P1363: A Comprehensive Standard for Public-Key Cryptography

  RSA Public Key Cryptography Standards (PKCS) #13 Elliptic Curve Cryptography (ECC)

  US ANSI Elliptic Curve Digital Signature Algorithm (ECDSA)

  RSA 2048 and variations

  US Digital Signature Algorithm (DSA)
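As a minimal example of reusing a vetted algorithm rather than inventing one, SHA-256 from the NIST SHA-2 family listed above is available directly from the Python standard library:

```python
import hashlib

# SHA-256 via the standard library; a digest is 256 bits (64 hex chars).
digest = hashlib.sha256(b"The Open Group").hexdigest()
```

The implementation has been publicly vetted against the published NIST test vectors, which is precisely the assurance a home-grown algorithm cannot offer.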

 

9                        Principle 9: Resilience

Secure systems function while under duress.

Security controls that function properly only under ideal conditions cannot be relied upon. As with all IT architecture, well-designed Security Architecture incorporates fault-tolerant sub-systems and redundant equipment to ensure continuous service in the face of unusual pressure. But resiliency in architecture is about more than just equipment; it must include people and process as well.

A key characteristic of resiliency is planned system degradation – controlled degradation instead of uncontrolled failure. In response to a severe attack, the proper response – the planned response – may be to take a system offline in order to preserve data integrity and security. In such a case, the system functions as planned even when shut down. It is a controlled descent and landing; not a crash.

Resilience as discussed in this principle is not limited to, or even primarily applicable to, equipment. Resilience in people and process is also required, and such resilience requires decisions to be made before events transpire.

An organization should seek to avoid making significant decisions, especially decisions with potentially existential consequences, while under duress. Such decisions should be planned ahead of time as much as possible. An organization should analyze possible failure scenarios and produce mission rules that prescribe guidance during future periods of duress. These mission rules (typically expressed as statements such as “if … then … else …” or “case … 1 … 2 … n … else”) provide resiliency in process. Since not all cases are foreseeable, especially with regard to emergent properties of a system, an exception handler (the final else in the process description) should define the best available fallback strategy.
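A set of mission rules of the “if … then … else …” form can be sketched as follows. The events and responses here are purely illustrative placeholders, not prescribed actions:

```python
def mission_rule(event: str) -> str:
    """Pre-planned responses, decided before any period of duress.
    The final else is the exception handler for unforeseen cases."""
    if event == "ddos":
        return "enable upstream filtering and traffic-shaping"
    elif event == "data_breach":
        return "isolate affected systems and preserve evidence"
    elif event == "severe_attack":
        return "controlled shutdown to preserve data integrity"
    else:
        # Exception handler: not all cases are foreseeable, especially
        # those arising from emergent properties of the system.
        return "escalate to incident commander"
```

The value lies not in the code but in the fact that each branch was agreed calmly in advance, so responders under duress execute a plan rather than improvise one.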

An excellent example of designing for resiliency is the use of Chaos Monkey in high-volume cloud service data centers.

Chaos Monkey (originally developed by Netflix® and released as open source in 2012) is a software service that identifies groups of systems and randomly terminates one of the systems in a group. It was created to simulate the unpredictable failure states possible in any production system and is akin to having an attacker physically pull random cables out of a data center infrastructure. Chaos Monkey runs during working hours on live streaming systems serving millions of paying customers. It proves resilience by continuously testing automated failover in a time-sensitive online service: high definition video streaming.
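The core of the idea can be sketched in a few lines. This is not the Netflix implementation, merely an illustration of the random-termination principle, with invented group and instance names:

```python
import random


def chaos_monkey(groups: dict, rng: random.Random) -> dict:
    """For each group of redundant instances, pick one at random to
    terminate; if failover works, the survivors absorb the load."""
    return {name: rng.choice(instances)
            for name, instances in groups.items() if instances}


# Hypothetical production groups.
groups = {
    "streaming": ["stream-1", "stream-2", "stream-3"],
    "auth": ["auth-1", "auth-2"],
}
victims = chaos_monkey(groups, random.Random())
```

Running this continuously against live systems converts failover from an untested assumption into a routinely exercised, observable property.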

Resilient systems respond dynamically to adverse events, balancing mission performance requirements against the needs for data protection and infrastructure protection. It may be inevitable that performance will degrade under duress, but it should degrade gracefully, sacrificing lower-priority objectives for higher-priority ones as pressure increases.

In 2016, a massive botnet[8] Distributed Denial of Service (DDoS) attack against DNS provider Dyn[9] drew attention for causing users of Twitter, Amazon.com, GitHub™, and other major Internet platforms to experience service outages. As the attack increased in strength and shifted its focus to different data centers around the globe, Dyn was able to supplement their automated response techniques with additional tactics, including policy manipulation, filtering, and traffic-shaping.[10] Dyn’s ability to respond adaptively to millions of shifting and fluctuating attack vectors is an example of resilient architecture.

Like all Enterprise Architecture, resilient Security Architecture must be designed around the organization’s mission, risk profile and appetite, and other business considerations. The business drives what services are protected and at what cost, but the architecture should ensure that whatever is protected remains protected not only when things are going well, but also when they are going badly.

 

10                    Principle 10: Process-Driven

The security development process shall address required time horizons and engage stakeholders using a defined lifecycle.

Strategic

The goals of Security Architecture are to support the long-term business strategy of the organization. Security Architecture is itself a strategic activity requiring long-term investment and management support.

Tactical

Security Architecture is developed and implemented in a series of steps: change programs and projects that bring the long-term vision into real existence.

Operational

Security Architecture also provides the technology, tools, and processes for day-to-day, secure, risk-managed business operations.

Business Continuity

There is a fourth time horizon: unexpected events and outcomes that must be managed through contingency planning. Part of Security Architecture provides the capabilities to deal with these unplanned business interruptions.

Stakeholders

The Security Architecture development process should engage with all stakeholders that have a valid interest in the outcome of the activity. These will include business owners and business users, along with customers, suppliers, service providers, technologists, security specialists, operations staff, auditors, regulators, and perhaps many others. ISO/IEC/IEEE 42010:2011 refers to stakeholders as having concerns about an SOI.

Lifecycle

There is a lifecycle associated with the Security Architecture development process. Example definitions of this lifecycle can be found in various (security) architecture frameworks such as the TOGAF standard and the SABSA framework.

 

11                    Principle 11: Optimal Conflict Resolution

Security shall optimally resolve stakeholder conflicts by balancing business risks that conflict with one another.

The concerns of stakeholders (see Principle 10: Process-Driven) will often conflict. One role of Security Architecture is to resolve these conflicts in an optimal way, balancing functional requirements and other non-functional requirements (NFRs) against security needs.

These conflicts of interest can be very complex (see Principle 7: Simplicity). They arise from the complex cross-cutting nature of Security Architecture (see Principle 6: Holistic Approach).

As a simple example, consider the interaction of three different risk factors: usability, cost, and security level. These three bundles of risk management requirements are in constant tension with one another. Maximizing usability may increase cost and reduce security. Maximizing security may increase costs and reduce usability. Minimizing costs may reduce both usability and security. The ultimate goal of Security Architecture is to optimize these and other risk factors at the holistic SOI level.

Figure 7: Risk Factor Option Model (Derived from the Jericho Forum Cloud Cube Model developed by Adrian Seccombe)

In order to resolve conflicts, optimize solutions, and balance the costs and benefits, it is the role of Security Architects to make architectural decisions in collaboration with business stakeholders.

Security Architects will need to make architectural and solution design decisions in which the resolution of conflicts and solution optimization become the key issues. Cost-benefit analysis is also a consideration to ensure that the solution delivers the best business value and that the value contribution can be traced back to the value chain at the top of the business stack.

Architectural decisions are related to architecture principles but are not the same. An example of such a decision is whether an enterprise should use a single firewall or a set of serially connected firewalls of different types to protect itself. Such a decision relates to this principle; it should weigh costs (acquisition and operations, including complexity) against relative benefits, and explicitly state why the decision was made and how it supports the principle.

All architecture decisions should be fully documented and recorded for posterity: see Principle 12: Communication Clarity.

 

12                    Principle 12: Communication Clarity

Security shall leverage common terminology that enables effective communication between business and technology stakeholders.

In this respect, the Security Architect must be at least bilingual, fluent in both the language of business stakeholders (“business speak”) and the language of technologists and technicians (“technobabble”). This is an essential skill to support the mutual understanding of the various stakeholders’ concerns (see Principle 10: Process-Driven) and to resolve the conflicting concerns that can arise (see Principle 11: Optimal Conflict Resolution).

Beyond this, there may be other stakeholders with other languages (such as “legalese”) whose concerns must be integrated into the overall requirements for security and risk management (see Principle 6: Holistic Approach), and so this implies that the Security Architect should be multi-lingual and fluent across all these different specialist languages.

A further goal of Security Architecture is to provide shared knowledge management. Whilst Security Architecture is itself a concept, the Security Architecture description is a series of documentary artifacts that are produced as deliverables of the Security Architecture activities. These artifacts provide the documented results of Security Architecture teamwork and represent the shared knowledge base on the Security Architecture of the SOI. The use of a common shared language and lexicon of terms is essential to the effective communication and future preservation of the architectural concepts used and decisions made.

An important role for Security Architecture is to document architectural decisions and provide traceable rationales for those decisions so that others can understand why these specific decisions were made.

This ties in closely with Principle 11: Optimal Conflict Resolution. However, to make sure that documentation is useful, the clarity of the decision rationale is essential. The purpose of a documented architectural decision is to:

·         Ensure there is a single, authoritative source for communicating key decisions made about the architecture

·         Record the rationale and reasoning behind each decision

·         Help establish what you knew and when you knew it for the decision – traceability

·         Help to maintain the overall architectural integrity of the solution by ensuring all decisions are consistent with each other

·         Help to ensure the same issues are not addressed more than once because the resolution was forgotten

·         Help to document the architectural thought process and reasoning which might result in choices contrary to established or de facto standards

Architectural decisions are related to architecture principles in that the principles are the rules and guidelines against which architectures are developed – but, like all good rules, sometimes they are meant to be broken. Architectural decisions can be used to provide the rationale for deviating from these rules and standards, or for defining a de facto standard when no such direction exists.

One of the groups of professionals who will later ask questions about the architectural decisions is the auditor community – both internal and external. If we are to deter auditors from coming with their checklist of controls and get them to consider holistic Security Architecture, it is essential that we have prepared documented rationales for every architectural decision that was made. We might then move the auditors away from control checklists and get them to audit the quality of the architectural and design processes and the compliance of the architects and designers with those processes and practices.

 

13                    Principle 13: Usability

Security controls shall be user-transparent and not cause users undue extra effort.

Security controls that are hard to use, cause productivity loss, or are otherwise disruptive are often ignored, disabled, or circumvented, leaving resources vulnerable and negating the value of the controls. Security systems that are bypassed or not used are worthless. In addition, because humans are unpredictable and make mistakes, security controls should be tolerant of human error. Security systems must be as transparent and easy to use as possible without degrading the protection they provide. Solutions that, at the same level of security, reduce interaction with users are preferred over those that demand significant extra effort from users. Violation of this principle results in systems that encourage users to bypass security or that increase exposure through increased error rates.

Humans, for the most part, cannot interact with software directly. In many computing use-cases a human relies on a machine surrogate that acts on the user’s behalf as a proxy or principal. These interactions between the machine and the human – termed ceremonies by Carl Ellison – are often not documented or considered in system design, but they are real and are frequent sources of compromise or failure. This relationship is illustrated in Figure 8.

Figure 8: Human-to-Machine Interfaces

The Proxy connection between Ceremonies and Protocols attempts to match dissimilar communication capabilities. Many security breaches occur where a human interacts with a computer system, and many of these result when the security systems are cumbersome, encouraging people to avoid them where possible or to render them ineffective by making poor choices when options are presented. Carl Ellison’s talk on Ceremonies[11] from CRYPTO 2005 illustrates the human-to-computer protocol breakdown.

The classic example of poor usability is the use of passwords. The very properties that make passwords more effective – length and complexity – make them harder for humans to remember. This is exacerbated by requirements in many systems to change them frequently. The end result is that many people choose the same password for multiple systems and also choose passwords that are easy to guess.
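The tension between length, complexity, and memorability can be quantified with a back-of-the-envelope entropy calculation, assuming (optimistically) that passwords are chosen at random:

```python
# Rough entropy of a randomly chosen password:
# entropy_bits = length * log2(alphabet_size).
# Real human-chosen passwords have far less entropy than this upper bound.
import math

def entropy_bits(length: int, alphabet: int) -> float:
    return length * math.log2(alphabet)

print(round(entropy_bits(8, 26), 1))    # 8 lowercase letters  -> 37.6 bits
print(round(entropy_bits(8, 94), 1))    # 8 printable ASCII    -> 52.4 bits
print(round(entropy_bits(16, 26), 1))   # 16 lowercase letters -> 75.2 bits
```

Note that doubling the length of a simple lowercase password gains more entropy than adding symbol complexity to a short one, yet complexity rules are what most systems impose on users.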

Another example is the use of PKI certificates to authenticate websites to web browsers. If the site presents an unknown certificate, the user sees a pop-up message asking them to approve or trust that site – something the user is unqualified to judge, and the very reason the PKI certificate exists in the first place. The typical workaround is to preload the browser certificate cache with as many certificates as possible, hoping to include those used by the websites the user will access. This effectively nullifies any security countermeasure that relies on browser PKI certificates. How do you know that the preloaded certificates are genuine and not the result of some malware attack? You don’t.

In summary, we should not architect or design systems that require people to perform tasks or make decisions outside of their area of competence. Security solutions should augment and interact with humans in such a way as to expand their natural abilities.

 

14                    Principle 14: Security by Design

Security shall rely on specific, proven controls rather than obscurity.

Every security textbook advises strongly against using “Security through Obscurity”. But the real objection is to the practice, often by default, of just hoping security weaknesses don’t get discovered. Readers of Edgar Allan Poe’s “The Purloined Letter” or almost any of Arthur Conan Doyle’s Sherlock Holmes adventures are acquainted with the effectiveness against the general public of hiding information in plain sight, but also its ineffectiveness when confronted by a trained observer. Obscurity can be used to augment a security system but it must be carefully designed in and not just assumed to work. The safest assumption to make is that the attacker has the full documented software source code, the hardware details, and the operator’s instructions open in front of them as they attack your system.

Generally systems designed or sold to end users that are based on some element of obscurity rely on one or more fallacies:

1.          The assumption that because a problem is difficult for one person to solve, it is difficult for everyone

2.          The assumption that what is difficult for a human to comprehend is equally difficult for a machine to process.

3.          The assumption that strong defenses in one part of a security solution mean that the whole solution is strong.

4.          The assumption that humans will not compromise the security through poor implementation of the system or insecure end-user practices.

The scale and complexity of modern ICT systems are so far beyond human comprehension that a common mistake – one that enables fraudulent products to be marketed and sold – is to assume that something too complex or too vast for humans to grasp is equally impenetrable to computers. Some examples will help here.

Although not an example of obscurity per se, the mechanical encryption devices used during World War II illustrate how numbers too large for humans to grasp remain within reach of machines. The Enigma cipher machine is probably the best known of these.[12] Its security is controlled by five variables:

1.          A plugboard which could contain up to 13 dual wired cables

2.          Three ordered rotors with 26 input and output points

3.          Serrations on each rotor, allowing for 26 different starting points

4.          A movable ring on each rotor which controlled the rotation of the rotor next to it

5.          A non-rotating rotor that reflected the signals back through the other rotors

The end result is approximately 3 x 10^114 possible combinations, later increased to about 2 x 10^145 with the addition of another rotor.

However, due partly to design limitations (a letter could never be encrypted as itself, plugboard connections were bidirectional, etc.), and mostly due to poor security practices by the end users (ending all messages with “Heil Hitler”, transmitting the rotor settings as the first six characters, sending the same message to sites using both three and four-rotor machines, etc.), the practical limit was closer to 1 x 10^23 – still a large and difficult-to-grasp number, but one that could be solved manually, albeit slowly. More efficient breaking of Enigma messages had to wait until Turing, Welchman, and Keen built the Cryptanalytic Bombes.[13]
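Part of this combinatorics can be reproduced directly. The sketch below counts only the plugboard pairings and the rotor starting positions (it deliberately omits rotor ordering, ring settings, and the reflector, so it does not reach the full figure for the whole machine):

```python
# Counting a subset of Enigma's key space (illustrative, not exhaustive):
# the number of plugboard pairings for k cables is
#   26! / ((26 - 2k)! * k! * 2^k),
# and three rotors give 26^3 possible starting positions.
from math import factorial

def plugboard_ways(k: int) -> int:
    """Ways to connect k cables, each pairing two distinct letters."""
    return factorial(26) // (factorial(26 - 2 * k) * factorial(k) * 2 ** k)

total_plugboard = sum(plugboard_ways(k) for k in range(14))  # 0..13 cables
rotor_starts = 26 ** 3

print(f"plugboard settings (0-13 cables): {total_plugboard:.3e}")
print(f"rotor start positions: {rotor_starts}")
```

Even this partial count – on the order of 10^14 plugboard settings alone – is far beyond human enumeration, yet trivial for a machine to compute.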

Quoting Dr. Miller:

“The strength of the large numbers, numbers so vast they are really beyond true comprehension, led the Germans to have absolute and complete confidence in the integrity of the Enigma cipher machine. And in that misplaced confidence, the Germans were absolutely, completely, and fatally wrong.”

This example is included because there are many modern insecure encryption systems – typically based on one-time pads, internally developed or sold as products – built on the assumption that a very large key, such as a CD or DVD filled with a long pseudo-random number, is secure. However, as noted above, just because very large numbers are incomprehensible to humans does not mean that machines cannot deal with them easily.

A second, related example is the use of steganography. Typically the data is embedded (hidden) in a photo by altering bits in a way that makes no visible change to the picture itself. This works because a bitmapped image describes the color of each pixel with three 8-bit numbers, one each for red, green, and blue. Altering the least significant bits produces color changes too subtle for a viewer to notice, so those bits can be used to carry non-picture-related information. Given that humans cannot detect the altered bits, and given how easily photos are captured and distributed today, this would seem an ideal way to conceal and distribute information.
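A minimal sketch of least-significant-bit (LSB) embedding, operating on bare (R, G, B) tuples rather than a real image file:

```python
# Sketch of LSB steganography: hide one message bit in the red channel
# of each pixel. Pixels here are plain (R, G, B) tuples; a real tool
# would read and write them from an uncompressed image format.

def embed(pixels, message: bytes):
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    out = []
    for pixel, bit in zip(pixels, bits):
        r, g, b = pixel
        out.append(((r & ~1) | bit, g, b))   # overwrite the red LSB
    return out + pixels[len(bits):]          # untouched remainder

def extract(pixels, length: int) -> bytes:
    bits = [p[0] & 1 for p in pixels[: length * 8]]
    return bytes(
        sum(bit << (7 - i) for i, bit in enumerate(bits[n : n + 8]))
        for n in range(0, length * 8, 8)
    )

cover = [(120, 80, 200)] * 64                # 64 pixels: room for 8 bytes
stego = embed(cover, b"hi")
assert extract(stego, 2) == b"hi"
# Each channel changes by at most 1/255: invisible to the human eye,
# but trivially recoverable by anyone who knows (or guesses) the scheme.
```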

However, the former Chief Cryptographer of the CIA asserted in conversation that the CIA considered steganography to be equivalent to encryption with only one key – the data embedding/retrieval algorithm. And since the algorithm could not be changed, it was relatively easy to extract the data once steganography was suspected.

Security therefore should not rely on black boxes or other forms of obscurity; it should rely on specific, proven controls. The parallel in cryptography is that the algorithms are assumed to be well known, and security depends on protecting the encryption keys. There is a common belief that if data is encrypted it is by definition secured against attack. What is not widely appreciated is that attacks on cryptographic systems are usually mounted not against the algorithm itself, but against the key management systems and protocols. When someone states that “it’s encrypted”, the next question should be: “How are the keys managed and exchanged?”.

This doesn’t mean that security mechanisms should be deliberately exposed to intruders or attackers, but rather that the implementers should not rely on hidden or secret technologies as the only protection layer. Violations of this principle result in systems that may be flawed because of lack of independent analysis or may suffer catastrophic failures when their designs are exposed.

There may also be valid security business cases for incorporating obscurity into a security system. One common example is the use of Whole Disk Encryption (WDE). If the boot disk is encrypted using WDE, there are only a few viable options for decrypting the key so the operating system can read the disk at boot time. The most secure way to do this is to require a password to decrypt the disk when the Basic Input/Output System (BIOS) is run before the operating system loads from the disk.

A second, less secure option is to hide the encryption key somewhere on the disk, typically in a block marked by the WDE software as a bad block – so ignored by the operating system – that can be read before system boot. A third option is to hide the key in a Trusted Platform Module (TPM) – a cryptographically protected hardware storage module – and retrieve it during boot.

Organizations generally reject the first option since the end user will have to enter a password twice, once before boot and once to access the system after boot. Typically the second option, although not strong against determined attackers, is chosen for ease-of-use and protection against the casual thief who will probably not start analyzing the disk for bad blocks that actually contain data. The third option, probably the best of the three, is not in wide use due to the slow adoption and poor integration of current TPM technology.

It may be equally valid to select the two-password option or the single-password-with-hidden-key option, but the decision should be based on a thorough risk analysis.

 

15                    Principle 15: Precedence

Use strong protection mechanisms to protect weaker ones, not the reverse.

This principle is really just a special case of the more general principle that things that matter most should not be at the mercy of things that matter less. In the security world, it means more specifically that when multiple security systems are used and one system directly controls the security of another, the stronger system should be used to protect the weaker one.

For example, most email systems support the use of encryption keys and certificates to sign and encrypt email. In order to do this efficiently, the keys are often stored somewhere in the file system of the email user’s computer. File access controls provided by the host operating system are typically much weaker than the email cryptographic protection. If the keys are only protected by operating system permission controls, this allows an attacker to read or modify the email by compromising the operating system and extracting the keys instead of directly attacking the cryptographic protection.

To compensate for this, the email system should encrypt the keys, requiring the end user to enter a key decryption key, or even better to use an additional hardware token to decrypt the key encryption key before it can be applied to the email.

This principle does not apply to the general ordering of independent security layers; it is specific to security mechanisms that are used to protect each other. Related to this is the use of two or more relatively weak security mechanisms that combine to produce a stronger control. The combination of a PIN and a hardware token is a good example: each is weak on its own – the PIN lacks entropy and the token can be lost or stolen – but together they are much stronger.

 

16                    Principle 16: Device Sovereignty

All devices shall be capable of maintaining their security policy on an untrusted network.

This is another security principle from the Jericho Forum (Commandment #5). It addresses a number of characteristics of the modern IT environment associated with the failure of perimeter-based security to act as anything better than a general noise filter – a situation the Jericho Forum termed “deperimeterization”.

As described in more detail in Principle 19: Access Control, it is generally more effective for protection mechanisms to be local to, or close to, the resource being protected. This presents a smaller attack surface and increases the chance that the protection will travel with the resource.

As Marc Goodman observes,[14] in the modern IT world it is nearly impossible for any networked resource or asset to remain disconnected from the global Internet. Even disconnected resources have proven susceptible: Stuxnet,[15] which targeted and destroyed centrifuges used in Iran’s nuclear program, traveled via USB drives that were eventually plugged into the air-gapped systems. Even the International Space Station has suffered malware infections, although not from Stuxnet as was originally reported. The point is that if a resource is networked it is most likely accessible from the Internet at some point in time, and its security controls should assume that a direct network attack is possible.

And it’s not just large stationary computing devices or industrial control equipment behind corporate or government networks that are exposed. The proliferation of small, mobile devices also demands that they carry their security protections with them as they transit different environments that have different security risk profiles.

With the very rapid growth of devices that fit into the IoT category, we now have large quantities of devices that often have weak or no security protections, communicate using weak signaling mechanisms, and are designed around a business model focused on initial distribution rather than after-sales support. Figure 9 illustrates some of the attack vectors associated with IoT devices.

Figure 9: Internet of Things Attack Surface

Bruce Schneier[16] wrote about the challenges of securing IoT devices:

“The technical reason these devices are insecure is complicated, but there is a market failure at work. The Internet of Things is bringing computerization and connectivity to many tens of millions of devices worldwide. These devices will affect every aspect of our lives, because they're things like cars, home appliances, thermostats, light bulbs, fitness trackers, medical devices, smart streetlights, and sidewalk squares. Many of these devices are low-cost, designed and built offshore, then rebranded and resold. The teams building these devices don’t have the security expertise we’ve come to expect from the major computer and smartphone manufacturers, simply because the market won’t stand for the additional costs that would require. These devices don't get security updates like our more expensive computers, and many don't even have a way to be patched. And, unlike our computers and phones, they stay around for years and decades.”

While it may seem like the IoT marketplace is just another trend, it really represents a significant step forward in the reshaping of the IT environment and the convergence of the physical world with the logical.

Miniaturization of these physical devices accompanied by analytics-derived intelligence sets the stage for further integration into the biological world. Rather than being a technology phase, this is the beginning of a formation of a new layer of ICT/ICS technology that will continue to be embedded deeper and deeper into our environment.

These trends challenge the Security Architect in new ways. It is impossible to force the market to integrate high-level security into IoT devices, since adoption is a market-driven phenomenon shaped by perceptions of cost and benefit; but architecting the layered way in which the overall integration is done holds possibilities for improved security. Once again, detailed risk assessments are required to fully understand the possible impacts for each use-case.

 

17                    Principle 17: Defense in Depth

Greater security is obtained by layering defenses.

Defense in depth is a traditional security approach where each resource is secured to the greatest possible degree by multiple layers or levels of security. This is a resilient approach which assumes that if one of the layers is breached the others will protect the integrity of the system.

These layers must be independent of each other to be effective. Violation of this principle results in designs with multiple single points of failure. For defense in depth to be effective, the layers should be made up of different types of controls, not multiple layers of the same type.

When multiple layers of the same technology are used as defenses, the possibility of a class break exists that will allow a single attack vector to penetrate them all. Multiple layers of different technologies and processes require a much more sophisticated attack using a variety of tools and methods. In the example on the left in Figure 10, stealing properly secured data requires either physical access to the device or penetrating the network firewall. The device may also be protected by WDE (ineffective if the machine is already running), then by operating system file access controls and application role access controls.

Last, there may be a layer of encryption, which might leverage the application to manage the encryption keys. Depending on the application and on whether the intrusion is physical or network-based, these six layers might collapse to three or four, but they remain distinct protections requiring different attack strategies.

The effectiveness of layers based on similar technology versus layers based on different types of controls is illustrated below. The vertical layers on the left represent true defense in depth, while multiple controls of the same type are shown on the right as horizontal layers.

Figure 10: Defense in Depth Models

Compartmentalization, a form of defense in depth, controls access to resources by isolating them into distinct groups and applying protections to those groups. Systems should be designed with protection mechanisms that isolate resources. Compartmentalization also makes it easier to implement least privilege (see Principle 18: Least Privilege).

Another decision to be made when employing defense in depth, or even one layer of defense, is how to handle the failure of a control. The decision to fail open or closed depends on the sensitivity of the resource and requirements for the service it supports. This is an area where Finite State Machine (FSM) modeling may be useful to explore all of the possible states into which a system may fail.

If availability is more important than confidentiality, then the decision may be made to have the security control fail open, allowing access. If confidentiality is more important the decision may be made to have the control fail closed, blocking access. This decision should be made as part of the risk analysis and control selection process.
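The choice can be made concrete with a small sketch (the function names are hypothetical): the same failing control is wrapped to fail open for an availability-first service and to fail closed for a confidentiality-first one:

```python
# Sketch of the fail-open vs. fail-closed choice: the same control,
# wrapped two ways, behaves differently when its decision engine fails.

def check_policy(user: str) -> bool:
    raise RuntimeError("policy engine unreachable")   # simulated control failure

def guarded(user: str, fail_open: bool) -> bool:
    try:
        return check_policy(user)
    except RuntimeError:
        # Availability-first service: allow. Confidentiality-first: deny.
        return fail_open

print(guarded("alice", fail_open=True))    # True  -> access allowed despite failure
print(guarded("alice", fail_open=False))   # False -> access blocked
```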

Both prevention and detection have important roles in a security system. Systems should be capable of detecting attacks that cannot be prevented. Detection of attacks is also a security control and valid defensive layer. Figure 11 shows an example of a detailed defense in depth multi-tiered control strategy.


Figure 11: Defense in Depth Using a Multi-Tiered Control Strategy

 

18                    Principle 18: Least Privilege

Principals (people, things, processes, etc.) shall be granted only the rights necessary to perform their authorized tasks.

The purpose of least privilege is to ensure that each subject in a system or user of a resource must be granted only the most restrictive set of privileges or least amount of access necessary to perform authorized tasks. Application of this principle lowers the overall attack surface and limits the damage that can result from accidents, errors, or unauthorized uses. Systems designed to enforce least privilege reduce both accidental and intentional exposure. Violations of this principle combined with other vulnerabilities can result in significant exposure to risk.

The ability to deploy least privilege systems and services depends to some degree on technology and procedures that can perform fine-grained access control. Fine-grained access control, in turn, requires metadata about both the resources or assets and the end users, captured or stored in such a way that the access control system can use it to make access decisions about those resources.

A specialized form of least privilege that also derives from defense in depth is the concept of separation of duties. Separation of duties is a practice that divides critical tasks into multiple roles to ensure that they cannot be compromised by a single individual, whether by accident or by intent. As an example, an application may have an administrator role that is used to populate end-user accounts but is not able to exercise the application function. The end users who can run the application are not allowed to add, change, or remove accounts.
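The example above can be sketched as a simple role-to-permission table in which no single role can both manage accounts and run the application (role and action names are illustrative):

```python
# Sketch of separation of duties: administrators can manage accounts
# but not run the application; end users the reverse. No single role
# holds both sets of rights.

PERMISSIONS = {
    "admin": {"create_account", "remove_account"},
    "user":  {"run_application"},
}

def allowed(role: str, action: str) -> bool:
    return action in PERMISSIONS.get(role, set())

assert allowed("admin", "create_account")
assert not allowed("admin", "run_application")   # admin cannot use the app
assert allowed("user", "run_application")
assert not allowed("user", "remove_account")     # user cannot manage accounts
```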

Separation of duties is not just a least privilege approach for managing people but is also valuable in stopping or limiting the impact of cyber-attacks and is one of the principles behind highly secure operating systems such as Trusted Linux®.

 

19                    Principle 19: Access Control

All resources of value shall be protected by scalable access control mechanisms.

Effective, scalable access control systems are based on a number of design principles. As mentioned in Principle 2: Context, access control includes three distinct operational processes:

·         Identification (who are you?)

·         Authentication (prove it)

·         Authorization (now that I know who you are, here is what you can do)

These are the typical operational processes with which an end user interacts in order to gain access to resources or assets. The general model is shown in Figure 12.

Figure 12: Logical Access Control Model
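The three operational processes can be sketched end-to-end as follows; the user store, hashing scheme, and permission names are illustrative only (a real system would use salted, deliberately slow password hashing):

```python
# Sketch of the three operational steps: identification (who are you?),
# authentication (prove it), authorization (what you may do).
import hashlib

USERS = {"alice": hashlib.sha256(b"correct horse").hexdigest()}   # identification store
GRANTS = {"alice": {"read:reports"}}                              # authorization store

def access(user: str, password: str, permission: str) -> bool:
    if user not in USERS:                                         # who are you?
        return False
    proof = hashlib.sha256(password.encode()).hexdigest()
    if proof != USERS[user]:                                      # prove it
        return False
    return permission in GRANTS.get(user, set())                  # what you can do

assert access("alice", "correct horse", "read:reports")
assert not access("alice", "wrong password", "read:reports")
assert not access("alice", "correct horse", "delete:reports")
```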

In addition to identification, authentication, and authorization, there is a set of preparatory processes and decisions that need to be addressed when designing and configuring an access control system. These include:

·         Entitlement, or the business process of determining access rights

·         Provisioning, or encoding the entitlement decisions into machine-readable attributes and policy rules that the access control system can use

·         Enrollment, or the process of instantiating the specific identities of the principals and resources into the access control system

Keeping these processes distinct enhances the flexibility and scalability of the access control system.
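The distinction between the three preparatory processes can be made concrete with a small sketch. Everything here is illustrative (the rule fields, role names, and principals are invented): entitlement is a human business decision, provisioning encodes it as machine-readable data, and enrollment instantiates specific principals.

```python
# Entitlement: a business decision, recorded in human-readable form.
entitlement_decision = "Auditors may read financial records"

# Provisioning: the decision encoded as a machine-readable policy rule.
policy_rules = [
    {"role": "auditor", "action": "read", "resource_type": "financial-record"},
]

# Enrollment: a specific principal instantiated with her attributes.
principals = {"bob": {"role": "auditor"}}

def decide(principal_id, action, resource_type):
    """Decide access by matching the principal's attributes against rules."""
    attrs = principals.get(principal_id, {})
    return any(rule["role"] == attrs.get("role")
               and rule["action"] == action
               and rule["resource_type"] == resource_type
               for rule in policy_rules)

assert decide("bob", "read", "financial-record")
assert not decide("bob", "delete", "financial-record")
```

Because the three artifacts are separate, a new entitlement decision changes only the policy rules, and a new hire changes only the enrollment data — which is the flexibility and scalability benefit noted above.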

The access control system must support governance requirements that include regulatory requirements, contractual obligations, enterprise policies, and other constraints. Capability to support these constraints should be included in the access control system architecture.

·         Entitlement decisions need to take into account specific business decisions and rules; entitlement processes are typically a local adaptation of the governance requirements to the specific business application or system under design

·         Provisioning is the transformation and encoding of governance constraints and entitlement decisions from policies, procedures, and other human-readable media into machine-readable access control rules and resource attributes that are stored in the access control system

·         Enrollment is the process of entering end-user metadata, whether the end user is human or machine, into the access control system

Provisioning may be static or dynamic. In a static system, the metadata is pre-provisioned before use. Role-Based Access Control (RBAC) systems are typically statically provisioned. In dynamic access control systems, the end-user metadata is not stored but arrives with the access request and the access decision is made using attributes that arrive in real time. Most access control systems require the ability to use both stored and dynamic end-user metadata to make access control decisions.
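The static/dynamic distinction can be sketched as two decision paths. The fragment below is illustrative (the role names and attribute keys are assumptions made for the sketch): the static path looks up pre-provisioned metadata in the RBAC style, while the dynamic path decides solely from attributes that arrive with the request.

```python
# Static provisioning: subject metadata stored before use (RBAC-like).
STATIC_ROLES = {"alice": "operator"}
ALLOWED_ROLES = {"operator", "supervisor"}

def decide_static(subject_id):
    """Decide from pre-provisioned, stored metadata."""
    return STATIC_ROLES.get(subject_id) in ALLOWED_ROLES

def decide_dynamic(request_attributes):
    """Decide from attributes presented in real time with the request;
    nothing about the subject is stored in advance."""
    return request_attributes.get("role") in ALLOWED_ROLES

assert decide_static("alice")
assert not decide_static("mallory")            # unknown, unprovisioned subject
assert decide_dynamic({"role": "supervisor"})  # attribute arrived with request
```

A system supporting both paths, as most must, would consult stored metadata first and fall back to (or combine it with) attributes carried in the request.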

Efficiency and scalability require access control systems to separate the access control decision function from the access control enforcement function. Generally, access control decisions are made globally within an enterprise or program of a large enterprise, are communicated using standard protocols, and are managed by the resource owners. But access control enforcement is typically local – the closer it is to the resource, the smaller the attack surface – and customized for the resource being protected. Access control decision functions require metadata about both the end user and the protected resource, and the ability to make repeatable, consistent decisions.

With this in mind, there are five design rules for access control systems that, if followed, will increase their usability, efficiency, interoperability, and scalability. These are:

1.          Access control decisions should be driven by policy. That is, the business rules and other constraints, as described above, should be encoded as machine-readable policies into the access control system. The current standard for expressing authorization rules and performing access control decisions is the XACML standard managed by OASIS. Using machine-readable policies keeps access control decisions consistent.

2.          Access control operations and access decisions should be automated as much as possible. Typically an access control enforcement mechanism will receive the access request and query the access control decision function using a standard protocol like XACML. The access control decision function will respond to the enforcement mechanism with the result of its decision.

3.          Access control systems should be disintermediated; that is, built from standardized, loosely-coupled components that may come from different vendors. This is especially important when enterprises require fine-grained enterprise-to-enterprise access control capability for joint operations. Typical components include access control decision engines, access control enforcement mechanisms (firewalls, portals, encryption, application role-based controls, etc.), and directories that hold resource and end-user metadata.

4.          These disintermediated components should be connected using industry standard protocols. In addition to XACML for authorization, SAML, also from OASIS, may be used for carrying authentication information. Directories using the Lightweight Directory Access Protocol (LDAP) are commonly used to store and serve up metadata for both authentication and authorization.

5.          As mentioned above, some of the access control services or components may be integrated to provide enterprise-level access control system management. Some services that are typically run at an enterprise scale include common policy management services, common logging and auditing services, and centralized encryption key management services.
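Design rules 1 and 2, together with the decision/enforcement split described earlier, can be sketched as follows. This is a simplified illustration, not an XACML implementation: in a real deployment the enforcement mechanism would query the decision function over a standard protocol such as XACML, and the rule and attribute names here are invented for the sketch.

```python
# Machine-readable policy, evaluated in order; first matching rule wins.
POLICY = [
    {"effect": "Permit", "role": "clerk", "action": "read"},
    {"effect": "Deny",   "role": "clerk", "action": "write"},
]

def decision_function(attributes):
    """Policy-driven decision point: global, consistent, automated."""
    for rule in POLICY:
        if (rule["role"] == attributes["role"]
                and rule["action"] == attributes["action"]):
            return rule["effect"]
    return "Deny"  # default-deny when no rule matches

def enforcement_function(attributes, perform):
    """Resource-local enforcement point: asks the decision function,
    then either performs the guarded operation or refuses."""
    if decision_function(attributes) == "Permit":
        return perform()
    raise PermissionError("access denied")

assert enforcement_function({"role": "clerk", "action": "read"},
                            lambda: "record contents") == "record contents"
try:
    enforcement_function({"role": "clerk", "action": "write"}, lambda: None)
except PermissionError:
    pass  # denied, as the policy requires
```

Because the enforcement function contains no business rules of its own, the policy can be managed centrally by resource owners while enforcement stays close to each protected resource.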

Access control systems built using these principles can operate at the network, application, data, and device level and within and between enterprises.

 

20                    Principle 20: Communication Security

Devices and applications shall communicate using open, secure protocols.

The corollary to Principle 16: Device Sovereignty, which holds that devices need to maintain their own security on untrusted networks, is that when they communicate over those same untrusted networks, they must do so securely. This principle, another Jericho Forum Commandment (#4), means that the security requirements of confidentiality, integrity, availability, and reliability should be assessed and built into protocols as appropriate, not added on later. It is also important to require that these secure protocols be developed using an open peer-review process, which helps ensure protocol integrity and widespread adoption.

For the Internet, the Internet Engineering Task Force (IETF), as the development arm of the Internet Society, is the primary producer and maintainer of Internet protocols. The IETF both produces security-specific protocols and requires security attributes to be part of non-security Internet protocols.

In today’s over-connected world, no unencrypted transmission can be assumed to have any level of security. In the wake of a number of breaches in Internet security where large volumes of data were captured by adversaries, the IETF has been encouraging the transition from a mostly unencrypted Internet to a mostly encrypted Internet. The IETF has also been replacing or augmenting old protocols that lack security features with new ones to provide the necessary security.
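As a small sketch of this principle, the fragment below uses TLS, an open, peer-reviewed protocol standardized by the IETF, via Python’s standard library. The default context refuses connections unless the server’s certificate chain and host name verify; the host name shown is only a placeholder.

```python
import socket
import ssl

def tls_version_of(host: str, port: int = 443) -> str:
    """Open a TLS connection with full certificate and hostname
    verification, and report the negotiated protocol version."""
    context = ssl.create_default_context()  # CERT_REQUIRED + hostname check
    with socket.create_connection((host, port)) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()

# Example (requires network access):
# tls_version_of("www.example.com")
```

The key point is what the code does not do: it never sends application data over a plain socket, and it does not disable verification — the defaults embody the "secure by design, not added on later" requirement.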

 


Abbreviations

AES            (US NIST) Advanced Encryption Standard

BAP            Business Attribute Profile (The SABSA Institute)

BIOS          Basic Input/Output System

CRL            Certificate Revocation List

CSF            (US NIST) Cybersecurity Framework

DDoS         Distributed Denial of Service

DHS           (US) Department of Homeland Security

DNS           Domain Name System

DNSSEC     Domain Name System Security Extensions

DRM          Digital Rights Management

EaaS           Everything as a Service

ERM           Enterprise Risk Management

FSM           Finite State Machine

GSS-API     Generic Security Service Application Program Interface

ICS             Industrial Control System

ICT             Information and Communications Technology

IETF           Internet Engineering Task Force

IoT             Internet of Things

IPSec          IP Security

IT               Information Technology

LDAP         Lightweight Directory Access Protocol

NFR            Non-Functional Requirement

NIST          (US) National Institute of Standards and Technology

OAuth        Open Authorization

OCSP          Online Certificate Status Protocol

OWASP      Open Web Application Security Project

PDU           Protocol Data Unit

PKI             Public Key Infrastructure

RBAC         Role-Based Access Control

RSA            Rivest-Shamir-Adleman

SAML        Security Assertion Markup Language

SLA            Service-Level Agreement

SOA           Service-Oriented Architecture

SOI             System of Interest

TLS            Transport Layer Security

TPM           Trusted Platform Module

WDE          Whole Disk Encryption

WPA           Wi-Fi Protected Access

XACML     eXtensible Access Control Markup Language

Index


access control............................. 44

AES.......................................... 23

AES Counter Mode.................... 23

attribute-based access rules........... 7

authentication......................... 6, 44

authorization.......................... 6, 44

BIOS......................................... 36

business context........................... 4

business intelligence................... 11

business risk profile...................... 4

CAST........................................ 23

Chaos Monkey........................... 25

communication clarity................ 30

communication security.............. 47

compartmentalization................. 41

conflict resolution....................... 28

context......................................... 5

cryptography.............................. 18

CSF........................................... 21

cyber-attack................................. 1

DCA.......................................... 24

DDoS........................................ 26

DECT........................................ 24

defense in depth......................... 40

deperimeterization...................... 38

device sovereignty...................... 38

DHS............................................ 5

disintermediation........................ 17

DNSSEC................................... 22

DSA.......................................... 24

EaaS.................................... 15, 16

ECDSA..................................... 24

Enigma cipher machine.............. 34

enrollment................................. 45

entitlement................................. 44

FSM.......................................... 41

GSS-API................................... 23

holistic approach........................ 14

identification.......................... 6, 44

IEEE P1363............................... 24

IETF.......................................... 47

information security..................... 1

intelligence................................ 10

IoT............................................ 11

IPSec......................................... 22

ISO 31000:2018................ 4, 12, 22

ISO/IEC/IEEE 42010:2011 8, 14, 27

Kerberos.................................... 23

LDAP........................................ 46

least privilege............................. 43

loss prevention............................. 3

MARS....................................... 23

NFR.......................................... 14

NIST SP 800-37......................... 22

NIST SP 800-53......................... 22

OAuth....................................... 23

OCSP........................................ 19

Open FAIR method.................... 22

opinion mining........................... 11

OWASP.................................... 23

PKI........................................... 19

PKI certificate............................ 33

precedence................................. 37

principles..................................... 1

process-driven............................ 27

provisioning............................... 44

RBAC....................................... 45

RC4........................................... 23

RC6........................................... 23

real-time access............................ 6

registration................................... 6

resilience................................... 25

reuse.......................................... 21

RIPEMD................................... 23

risk.............................................. 3

role-based access rules.................. 7

RSA 2048.................................. 24

Salsa20...................................... 23

SAML............................ 13, 23, 46

separation of duties..................... 43

SHA-2....................................... 23

SOI............................................. 8

system environment...................... 8

The SABSA Institute.................... 1

threat actor................................. 11

threat intelligence....................... 11

TLS........................................... 22

TOGAF standard.......................... 1

TPM.......................................... 36

trust........................................... 12

Twofish..................................... 23

usability..................................... 32

viewpoint................................... 14

WDE......................................... 36

Whirlpool.................................. 23

WPA......................................... 22

XACML.................................... 23


 



[1] See A Long Day Steeped in Pomp, History, and Emotion, J. Zeleny, The New York Times (see Referenced Documents).

[2] Derived from Jericho Forum Commandment #3 (see Referenced Documents).

[3] Enterprise Security Architecture, pp.190-205 (see Referenced Documents).

[4] Enterprise Security Architecture, pp.254-265 (see Referenced Documents).

[5] PKI: It’s Not Dead, Just Resting, by Peter Gutmann (see Referenced Documents).

[6] See https://en.wikipedia.org/wiki/ISO/IEC_27000-series.

[7] See https://en.wikipedia.org/wiki/BS_7799.

[8] See https://en.wikipedia.org/wiki/Botnet.

[9] See https://dyn.com.

[10] See https://dyn.com/blog/dyn-analysis-summary-of-friday-october-21-attack/.

[11] Carl Ellison, Ceremonies, CRYPTO 2005, August 16, 2005; see www.iacr.org/conferences/crypto2005/r/48.mov and http://world.std.com/~cme/Ceremonies.ppt.

[12] The Cryptographic Mathematics of the Enigma, by Dr. A.R. Miller (see Referenced Documents).

[13] The Turing Bombe, by Frank Carter (see Referenced Documents).

[14] Future Crimes: Everything is Connected, Everyone is Vulnerable, and What we Can do About it, by Marc Goodman (see Referenced Documents).

[15] See https://en.wikipedia.org/wiki/Stuxnet.

[16] See Crypto-Gram, November 15, 2016, which is a free monthly email digest of posts from Bruce Schneier’s Schneier on Security blog.